Convergence Rates for Learning Linear Operators from Noisy Data

Authors

Abstract

This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces. The training data comprises pairs of random input vectors in a Hilbert space and their noisy images under an unknown self-adjoint linear operator. Assuming that the operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues given the data. Adopting a Bayesian approach, the theoretical analysis establishes posterior contraction rates in the infinite data limit with Gaussian priors that are not directly linked to the forward map of the problem. The main results also include learning-theoretic generalization error guarantees for a wide range of distribution shifts. These convergence rates quantify the effects of data smoothness and of true eigenvalue decay or growth, for compact or unbounded operators, respectively, on sample complexity. Numerical evidence supports the theory in diagonal and nondiagonal settings.
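
As a concrete illustration of the diagonal setting described in the abstract, the sketch below simulates a truncated version of the problem: noisy images of random inputs under a diagonal self-adjoint operator, with an independent Gaussian prior on each eigenvalue so that the posterior is available in closed form. The truncation level, eigenvalue decay, input covariance, noise level, and prior variances are illustrative assumptions, not the paper's choices.

```python
# Minimal sketch (not the paper's code): Bayesian recovery of the eigenvalues of
# a diagonal self-adjoint operator from noisy input-output pairs. The infinite-
# dimensional problem is truncated to J coordinates; all numerical choices are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

J, N, gamma = 200, 500, 0.1                 # truncation level, sample size, noise std (assumed)
j = np.arange(1, J + 1)
true_eig = j ** -2.0                        # assumed eigenvalue decay (compact operator)
prior_var = j ** -2.0                       # Gaussian prior variances, not tied to the forward map

# Training data: in the known diagonalizing basis the coordinates decouple,
# y_{n,j} = lambda_j * x_{n,j} + gamma * noise.
X = rng.normal(size=(N, J)) * j ** -0.5     # assumed diagonal input covariance
Y = X * true_eig + gamma * rng.normal(size=(N, J))

# Conjugate Gaussian posterior for each eigenvalue (independent 1-D regressions).
precision = 1.0 / prior_var + (X ** 2).sum(axis=0) / gamma ** 2
post_var = 1.0 / precision
post_mean = post_var * (X * Y).sum(axis=0) / gamma ** 2

err = np.linalg.norm(post_mean - true_eig)
print(f"l2 error of posterior-mean eigenvalue estimate: {err:.4f}")
```

Increasing the sample size N tightens the posterior around the true eigenvalues, which is the kind of contraction behavior the paper quantifies.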


Similar resources

Convergence rates for linear

In this work, we examine a finite-dimensional linear inverse problem where the measurements are disturbed by additive normal noise. The problem is solved both in the frequentist and in the Bayesian frameworks. Convergence of the methods used, as the noise tends to zero, is studied in the Ky Fan metric. The obtained convergence rate results and parameter choice rules are of a similar structur...

Full text
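
The following sketch (an illustration, not the cited work's code) sets up the kind of problem described in the snippet above: a finite-dimensional linear system with additive normal noise, solved both with Tikhonov regularization and with a Gaussian prior. The forward matrix, noise level, and regularization parameter are assumptions; the point is the standard correspondence between the two estimates.

```python
# Assumed setup: ill-conditioned linear system y = A x + noise, solved in a
# frequentist way (Tikhonov) and a Bayesian way (Gaussian prior, posterior mean).
import numpy as np

rng = np.random.default_rng(1)

n, sigma, alpha = 50, 0.01, 1e-3
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)   # ill-conditioned forward matrix (assumption)
x_true = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n))
y = A @ x_true + sigma * rng.normal(size=n)

# Frequentist: Tikhonov-regularized least squares.
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Bayesian: prior x ~ N(0, tau^2 I), noise ~ N(0, sigma^2 I); the posterior
# mean equals the Tikhonov estimate when alpha = sigma^2 / tau^2.
tau = sigma / np.sqrt(alpha)
x_bayes = np.linalg.solve(A.T @ A + (sigma / tau) ** 2 * np.eye(n), A.T @ y)

print(np.max(np.abs(x_tik - x_bayes)))      # essentially zero: the estimates coincide
```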

Rates of A-statistical convergence of positive linear operators

In the classical summability setting rates of summation have been introduced in several ways (see, e.g., [10], [21], [22]). The concept of statistical rates of convergence, for two nonvanishing null sequences, is studied in [13]. Unfortunately no single definition seems to have become the “standard” for the comparison of rates of summability transforms. The situation becomes even more uncharte...

Full text

Learning probabilistic planning operators from noisy observations

Building agents which can learn to act autonomously in the world is an important challenge for artificial intelligence. While autonomous agents often have to operate in noisy, uncertain worlds, current methods to learn action models from agents’ experiences typically assume fully deterministic worlds. This paper presents a noise-tolerant approach to learning probabilistic planning operators fro...

Full text

Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators

We investigate convergence rates of Tikhonov regularization for nonlinear ill-posed problems when both the right-hand side and the operator are corrupted by noise. Two models of operator noise are considered, namely uniform noise bounds and point-wise noise bounds. We derive convergence rates for both noise models in Hilbert and in Banach spaces. These results extend existing results where the ...

Full text
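
A rough sketch of the noisy-operator setting in the snippet above, simplified to a linear problem (the cited work treats nonlinear ones); the operator, noise levels, and regularization parameter are assumptions. Both the right-hand side and the operator are perturbed before Tikhonov regularization is applied.

```python
# Assumed linear stand-in for the noisy-operator setting: both y and A carry
# noise, and Tikhonov regularization is applied to the perturbed pair.
import numpy as np

rng = np.random.default_rng(2)

n, delta_y, delta_A, alpha = 40, 1e-2, 1e-2, 1e-2
t = np.linspace(0.0, 1.0, n)
A = np.exp(-np.abs(t[:, None] - t[None, :])) / n       # smoothing forward operator (assumption)
x_true = np.maximum(0.0, 1.0 - 4.0 * (t - 0.5) ** 2)

y_noisy = A @ x_true + delta_y * rng.normal(size=n)    # noisy right-hand side
A_noisy = A + delta_A * rng.normal(size=(n, n)) / n    # uniformly small operator noise

# Tikhonov: minimize ||A_noisy x - y_noisy||^2 + alpha * ||x||^2.
x_reg = np.linalg.solve(A_noisy.T @ A_noisy + alpha * np.eye(n), A_noisy.T @ y_noisy)

rel_err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```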

Iterative Concept Learning from Noisy Data

In the present paper, we study iterative learning of indexable concept classes from noisy data. We distinguish between learning from positive data only and learning from positive and negative data; synonymously, learning from text and informant, respectively. Following [20], a noisy text (a noisy informant) for some target concept contains every correct data item infinitely often while in additio...

Full text


Journal

Journal title: SIAM/ASA Journal on Uncertainty Quantification

Year: 2023

ISSN: 2166-2525

DOI: https://doi.org/10.1137/21m1442942